This paper makes progress toward learning Nash equilibria in two-player zero-sum Markov games from offline data. Specifically, consider a $\gamma$-discounted infinite-horizon Markov game with $S$ states, in which the max-player has $A$ actions and the min-player has $B$ actions. We propose a pessimistic model-based algorithm with Bernstein-style lower confidence bounds (called VI-LCB-Game), which provably finds an $\varepsilon$-approximate Nash equilibrium with a sample complexity no larger than $\frac{C_{\mathsf{clipped}}^{\star} S(A+B)}{(1-\gamma)^{3}\varepsilon^{2}}$ (up to some log factor). Here, $C_{\mathsf{clipped}}^{\star}$ is some unilateral clipped concentrability coefficient that reflects the coverage and distribution shift of the available data (vis-à-vis the target data), and the target accuracy $\varepsilon$ can be any value within $\big(0, \frac{1}{1-\gamma}\big]$. Our sample complexity bound strengthens prior art by a factor of $\min\{A, B\}$, achieving minimax optimality for the entire $\varepsilon$-range. An appealing feature of our result lies in its algorithmic simplicity, which reveals the unnecessity of variance reduction and sample splitting.
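The pessimism principle above subtracts a data-dependent penalty from value estimates at poorly covered state-action pairs. A minimal sketch of a Bernstein-style penalty (with illustrative constants only, not the exact bonus of VI-LCB-Game; `bernstein_penalty` and its arguments are hypothetical names):

```python
import numpy as np

def bernstein_penalty(v_next, p_hat, n_visits, gamma=0.9, delta=1e-2):
    """Illustrative Bernstein-style pessimism penalty for one state-action
    pair, driven by the empirical variance of the next-state value under
    the estimated transition kernel."""
    log_term = np.log(max(n_visits, 2) / delta)
    # Empirical variance of v_next under the estimated transitions p_hat.
    var = p_hat @ (v_next ** 2) - (p_hat @ v_next) ** 2
    # Variance-aware leading term plus a lower-order correction term.
    return (np.sqrt(2.0 * var * log_term / n_visits)
            + log_term / ((1.0 - gamma) * n_visits))

p_hat = np.array([0.5, 0.5])
v_next = np.array([0.0, 1.0])
loose = bernstein_penalty(v_next, p_hat, n_visits=100)
tight = bernstein_penalty(v_next, p_hat, n_visits=10_000)
```

More offline samples for a pair shrink its penalty, so better-covered pairs are penalized less, mirroring the role the concentrability coefficient plays in the bound.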
The curse of dimensionality is a widely known issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a near-optimal policy with sampling access to a generative model, the minimax-optimal sample complexity scales with $|\mathcal{S}| \times |\mathcal{A}|$, which can be prohibitively large when $\mathcal{S}$ or $\mathcal{A}$ is large. This paper considers Markov decision processes (MDPs) that admit a set of state-action features which can linearly express (or approximate) their probability transition kernels. We show that a model-based approach (resp.$~$Q-learning) provably learns an $\varepsilon$-optimal policy (resp.$~$Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp.$~$$\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here $K$ is the feature dimension and $\gamma \in (0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDPs, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, hence the title of this paper.
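The gap between the two bounds is a single effective-horizon factor $\frac{1}{1-\gamma}$, which is easy to see numerically (a hedged sketch suppressing constants and log factors; `sample_threshold` is a hypothetical helper):

```python
def sample_threshold(K, gamma, eps, horizon_power):
    """Order of the sample-size threshold K / ((1 - gamma)^p * eps^2),
    ignoring constants and logarithmic factors."""
    return K / ((1.0 - gamma) ** horizon_power * eps ** 2)

K, gamma, eps = 100, 0.9, 0.1
model_based = sample_threshold(K, gamma, eps, 3)  # K / ((1-γ)^3 ε²)
q_learning = sample_threshold(K, gamma, eps, 4)   # K / ((1-γ)^4 ε²)
# Q-learning's threshold is larger by exactly one factor of 1 / (1 - gamma);
# both scale with the feature dimension K rather than |S| x |A|.
```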
This paper considers a canonical clustering problem in which one obtains unlabeled samples drawn from a balanced mixture of two elliptical distributions and aims to estimate a classifier for the labels. Many popular methods, including PCA and k-means, require the individual components of the mixture to be somewhat spherical, and perform poorly when the components are stretched. To overcome this issue, we propose a non-convex program seeking an affine transformation that turns the data into a one-dimensional point cloud concentrating around $-1$ and $1$, after which clustering becomes easy. Our theoretical contributions are two-fold: (1) we show that the non-convex loss function exhibits desirable geometric properties when the sample size exceeds some constant multiple of the dimension, and (2) we leverage this to prove that an efficient first-order algorithm achieves near-optimal statistical precision without a good initialization. We also propose a general methodology for clustering with flexible choices of feature transforms and loss objectives.
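A toy sketch in the spirit of this approach: a quartic loss pushes each projection $\langle \beta, x\rangle$ toward $\pm 1$, and plain gradient descent from a random start recovers the cluster direction. The synthetic data, loss, and step size below are illustrative choices, not the paper's exact program:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: a balanced two-component mixture whose components
# are elongated (non-spherical), separated along coordinate 0.
n, d = 600, 5
labels = rng.integers(0, 2, size=n)
centers = np.where(labels == 1, 1.0, -1.0)
cov = np.diag([0.1, 0.25, 0.25, 0.25, 0.25])     # stretched components
X = rng.multivariate_normal(np.zeros(d), cov, size=n)
X[:, 0] += centers

def loss_and_grad(beta, X):
    """Quartic, non-convex loss pushing each projection <beta, x> to ±1."""
    proj = X @ beta
    resid = proj ** 2 - 1.0
    loss = np.mean(resid ** 2)
    grad = (4.0 / len(X)) * (X.T @ (resid * proj))
    return loss, grad

beta = 0.1 * rng.normal(size=d)                   # no careful initialization
loss0, _ = loss_and_grad(beta, X)
for _ in range(400):
    _, g = loss_and_grad(beta, X)
    beta -= 0.005 * g
loss1, _ = loss_and_grad(beta, X)

# Cluster by the sign of the one-dimensional projection; taking the max
# over the label swap accounts for the sign ambiguity of beta.
pred = (X @ beta > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```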
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing the experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
Learning feature interactions is key to the success of large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. To reduce the high cost of human effort in feature engineering, researchers have proposed several deep neural network (DNN)-based approaches that learn feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called the polynomial interaction network (PIN), which learns higher-order vector-wise interactions recursively. By integrating a subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on this network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate its performance on three real-world datasets. Our experiment results demonstrate the effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
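The recursive vector-wise interaction idea can be illustrated with a toy NumPy sketch. This is a hypothetical simplification, not the paper's exact PIN layer: field-level weighting is reduced to a single mixing matrix per layer, and the subspace-crossing mechanism and output head are omitted.

```python
import numpy as np

def pin_forward(x0, weights):
    """Toy recursive vector-wise interaction stack: each layer takes an
    elementwise (Hadamard) product with a mixed copy of the raw field
    embeddings, raising the polynomial degree by one; the residual term
    preserves all lower-order interactions.

    x0:      [num_fields, embed_dim] field embeddings
    weights: list of [num_fields, num_fields] mixing matrices
    """
    x = x0
    outputs = [x0]
    for W in weights:
        x = x * (W @ x0) + x          # degree-(l+1) terms + residual
        outputs.append(x)
    # Concatenate interactions of every order for a downstream predictor.
    return np.concatenate(outputs, axis=0)

rng = np.random.default_rng(1)
x0 = rng.normal(size=(4, 8))                       # 4 fields, dim-8 embeddings
ws = [0.1 * rng.normal(size=(4, 4)) for _ in range(2)]
feats = pin_forward(x0, ws)                        # orders 1..3 stacked
```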
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
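How formulaic knowledge might be spliced into SQL during parsing can be shown with a toy lookup. The bank entries, term names, and table below are hypothetical illustrations, not contents of KnowSQL or the ReGrouP framework:

```python
# Toy illustration: a formulaic knowledge bank maps domain-specific terms
# to arithmetic over table columns, which a parser can splice into SQL
# instead of requiring extra annotated examples for each term.
knowledge_bank = {
    "gross margin": "(revenue - cost) / revenue",
    "yoy growth": "(revenue - prev_revenue) / prev_revenue",
}

def apply_formula(question_term, table):
    """Look up the formula for a domain term and render a SQL fragment."""
    formula = knowledge_bank[question_term.lower()]
    return f"SELECT {formula} AS result FROM {table}"

sql = apply_formula("Gross Margin", "quarterly_reports")
# -> "SELECT (revenue - cost) / revenue AS result FROM quarterly_reports"
```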
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to have online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images, etc.), network architectures and training schemes. Through this study, we obtain two insights: 1) the input representation plays a crucial role in robustness; specifically, under specific corruptions, different representations perform quite differently. 2) Although state-of-the-art methods for LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.